
Functional Encryption




Partially Encrypted Deep Learning using Functional Encryption

Neural Information Processing Systems

Machine learning on encrypted data has received a lot of attention thanks to recent breakthroughs in homomorphic encryption and secure multi-party computation. It allows outsourcing computation to untrusted servers without sacrificing the privacy of sensitive data. We propose a practical framework to perform partially encrypted and privacy-preserving predictions which combines adversarial training and functional encryption. We first present a new functional encryption scheme to efficiently compute quadratic functions so that the data owner controls what can be computed but is not involved in the calculation: it provides a decryption key which allows one to learn a specific function evaluation of some encrypted data. We then show how to use it in machine learning to partially encrypt neural networks with quadratic activation functions at evaluation time, and we provide a thorough analysis of the information leaks based on indistinguishability of data items of the same label. Last, since several encryption schemes cannot handle the final thresholding operation used for classification, we propose a training method that prevents selected sensitive features from leaking by adversarially optimizing the network against an adversary trying to identify these features. This is of great interest for several existing works using partially encrypted machine learning, as it comes at almost no cost to the model's accuracy and significantly improves data privacy.
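The core functionality here, stripped of the cryptography, is that a decryption key for a quadratic form Q lets an evaluator learn only x^T Q x, never x itself; and a hidden unit with a square activation is exactly such a quadratic form. A minimal plaintext sketch (all function names are hypothetical; this simulates the functionality, not the scheme):

```python
def quadratic_form(x, Q):
    """f(x) = x^T Q x; under the FE scheme, a key for Q reveals only
    this value of the encrypted x, nothing else."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def square_unit(w, x):
    """A hidden unit with a square activation, (w . x)^2, rewritten as
    the quadratic form x^T (w w^T) x -- so the whole private layer
    fits the quadratic functionality."""
    n = len(x)
    Q = [[w[i] * w[j] for j in range(n)] for i in range(n)]
    return quadratic_form(x, Q)
```

For example, `square_unit([3, 1], [1, 2])` computes (3·1 + 1·2)² = 25 without the evaluator ever needing x in the clear in the real protocol.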


Privacy-Preserving Federated Learning from Partial Decryption Verifiable Threshold Multi-Client Functional Encryption

Wang, Minjie, Han, Jinguang, Meng, Weizhi

arXiv.org Artificial Intelligence

In federated learning, multiple parties can cooperate to train a model without directly exchanging their private data, but gradient leakage still threatens privacy and model integrity. Although existing schemes use threshold cryptography to mitigate inference attacks, they cannot guarantee the verifiability of aggregation results, leaving the system vulnerable to poisoning attacks. We construct a partial-decryption-verifiable threshold multi-client functional encryption scheme and apply it to federated learning to implement a verifiable threshold secure aggregation protocol (VTSAFL). VTSAFL empowers clients to verify aggregation results while minimizing both computational and communication overhead. The size of the functional key and of the partial decryption results is constant, which provides an efficiency guarantee for large-scale deployment. Experimental results on the MNIST dataset show that VTSAFL achieves the same accuracy as existing schemes while reducing total training time by more than 40% and communication overhead by up to 50%. This efficiency is critical for overcoming the resource constraints inherent in Internet of Things (IoT) devices.



Partially Encrypted Machine Learning using Functional Encryption

Neural Information Processing Systems

We gratefully thank the reviewers for their helpful comments. We clarify some details of the article below. In fact, this article shows that even if FE isn't as mature as homomorphic encryption, it can already support practical machine learning use cases. We do detail and reference many notions from cryptology: the ML community may not be familiar with these new concepts, and we sought to introduce them carefully and rigorously. In return, classical notions of ML do not need to be referenced as much because they are well established.


Functional Encryption in Secure Neural Network Training: Data Leakage and Practical Mitigations

Ioniţă, Alexandru, Ioniţă, Andreea

arXiv.org Artificial Intelligence

With the increased interest in artificial intelligence, Machine Learning as a Service provides the infrastructure in the Cloud for easy training, testing, and deployment of models. However, these systems have a major privacy issue: uploading sensitive data to the Cloud, especially during training. Therefore, achieving secure Neural Network training has been on many researchers' minds lately. More and more solutions to this problem are built around a main pillar: Functional Encryption (FE). Although these approaches are very interesting and offer a new perspective on ML training over encrypted data, some vulnerabilities do not seem to be taken into consideration. In our paper, we present an attack on neural networks that use FE for secure training over encrypted data. Our approach uses linear programming to reconstruct the original input, invalidating the previous security promises. To address the attack, we propose two solutions for secure training and inference that involve the client during the computation phase. One approach ensures security without relying on encryption, while the other uses function-hiding inner-product techniques.
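The underlying attack principle is simple: if the adversary knows the first-layer weight rows W[k] and can observe the decrypted inner products b_k = <W[k], x> for enough linearly independent rows, the "encrypted" input x is fully determined by a linear system. A minimal sketch of this recovery step (plain Gaussian elimination standing in for the paper's linear-programming formulation; the weights and input are made up for illustration):

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting: solve A x = b."""
    n = len(A)
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Adversary's view: known weight rows W and the leaked inner products b.
W = [[2.0, 1.0], [1.0, 3.0]]   # hypothetical known first-layer weights
x_secret = [4.0, -1.0]         # the private input (unknown to the server)
b = [sum(w * v for w, v in zip(row, x_secret)) for row in W]
x_rec = solve(W, b)            # reconstructs x_secret exactly
```

With two independent rows in two dimensions, `x_rec` equals the private input up to floating-point error, which is the leakage the paper's mitigations are designed to close.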


EFU: Enforcing Federated Unlearning via Functional Encryption

Mohammadi, Samaneh, Tsouvalas, Vasileios, Symeonidis, Iraklis, Balador, Ali, Ozcelebi, Tanir, Flammini, Francesco, Meratnia, Nirvana

arXiv.org Artificial Intelligence

Federated unlearning (FU) algorithms allow clients in federated settings to exercise their "right to be forgotten" by removing the influence of their data from a collaboratively trained model. Existing FU methods maintain data privacy by performing unlearning locally on the client-side and sending targeted updates to the server without exposing forgotten data; yet they often rely on server-side cooperation, revealing the client's intent and identity without enforcement guarantees - compromising autonomy and unlearning privacy. In this work, we propose EFU (Enforced Federated Unlearning), a cryptographically enforced FU framework that enables clients to initiate unlearning while concealing its occurrence from the server. Specifically, EFU leverages functional encryption to bind encrypted updates to specific aggregation functions, ensuring the server can neither perform unauthorized computations nor detect or skip unlearning requests. To further mask behavioral and parameter shifts in the aggregated model, we incorporate auxiliary unlearning losses based on adversarial examples and parameter importance regularization. Extensive experiments show that EFU achieves near-random accuracy on forgotten data while maintaining performance comparable to full retraining across datasets and neural architectures - all while concealing unlearning intent from the server. Furthermore, we demonstrate that EFU is agnostic to the underlying unlearning algorithm, enabling secure, function-hiding, and verifiable unlearning for any client-side FU mechanism that issues targeted updates.


Reviews: Partially Encrypted Deep Learning using Functional Encryption

Neural Information Processing Systems

Summary of the work: This paper proposes a methodology to perform inference on encrypted data using functional encryption. The authors develop a specific model consisting of a private and a public execution; the private (ciphertext) execution takes place in a 2-layer perceptron with square activation functions in the hidden layer. The output of this 2-layer perceptron is revealed to the server, which runs another ML model to classify the input. The authors provide functional encryption tools to efficiently run the private part of the protocol. Strong points: - Authors clearly distinguish their work from other private inference scenarios: their target is applications where the client might not be "online" and cannot communicate in an SFE protocol.
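The private/public split the review describes can be sketched end-to-end: the private part (run over ciphertext in the real protocol) is the 2-layer perceptron with square activations, and its revealed output feeds an ordinary public classifier. A toy plaintext walk-through with made-up weights:

```python
def private_part(W1, x):
    # Evaluated over encrypted x via FE in the real protocol; shown in
    # the clear here. Hidden layer with square activations:
    #   h_j = (<W1[j], x>)^2
    return [sum(w * v for w, v in zip(row, x)) ** 2 for row in W1]

def public_part(W2, h):
    # The intermediate h is revealed to the server, which classifies it
    # with an ordinary (public) linear model; return the argmax label.
    scores = [sum(w * v for w, v in zip(row, h)) for row in W2]
    return max(range(len(scores)), key=lambda k: scores[k])

W1 = [[1.0, -1.0], [0.5, 0.5]]   # hypothetical private-layer weights
W2 = [[1.0, 0.0], [0.0, 1.0]]    # hypothetical public classifier
label = public_part(W2, private_part(W1, [2.0, 1.0]))
```

Note that only h, not x, crosses the private/public boundary, which is exactly why the paper's leakage analysis focuses on what h reveals about the input.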


Reviews: Partially Encrypted Deep Learning using Functional Encryption

Neural Information Processing Systems

Privacy in machine learning is being studied by many in the community due to its importance in many practical applications. Most studies use Homomorphic Encryption or Secure Multi-Party Computation to achieve privacy. This work uses Functional Encryption (FE), which is a different set of tools with different capabilities. I find this a great contribution since it may influence future research by demonstrating another plausible direction. Moreover, the authors present a new FE scheme, tailored to work well with machine learning workloads.